Ascent-based Monte Carlo expectation–maximization
Authors
Brian S. Caffo, Wolfgang Jank and Galin L. Jones
Abstract
The expectation–maximization (EM) algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high-dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size is required to estimate these integrals within an acceptable tolerance when the algorithm is near convergence. Even if this sample size were known at the onset of implementation of MCEM, its use throughout all iterations is wasteful, especially when accurate starting values are not available. We propose a data-driven strategy for controlling Monte Carlo resources in MCEM. The algorithm proposed improves on similar existing methods by recovering EM’s ascent (i.e. likelihood increasing) property with high probability, being more robust to the effect of user-defined inputs and handling classical Monte Carlo and Markov chain Monte Carlo methods within a common framework. Because of the first of these properties we refer to the algorithm as ‘ascent-based MCEM’. We apply ascent-based MCEM to a variety of examples, including one where it is used to accelerate the convergence of deterministic EM dramatically.
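The mechanics of the sample-size rule can be seen in a small sketch. The following is a minimal illustration of the ascent-based idea, not the authors' implementation: on a toy random-intercept model y_i = mu + b_i + e_i with standard normal b_i and e_i, the E-step is estimated by posterior sampling of b, and the Monte Carlo sample size m grows whenever a normal lower confidence bound on the estimated increase in Q is not positive. The model, starting value, confidence level, doubling rule and budget cap are all illustrative choices.

```python
# A minimal sketch of the ascent-based idea, not the authors' code, on a
# toy random-intercept model: y_i = mu + b_i + e_i with b_i, e_i ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
n, mu_true = 200, 2.0
y = mu_true + rng.normal(size=n) + rng.normal(size=n)   # simulated data

def sample_posterior(y, mu, m, rng):
    """m draws of b_i | y_i, mu ~ N((y_i - mu)/2, 1/2), one row per draw."""
    return (y - mu) / 2.0 + rng.normal(scale=np.sqrt(0.5), size=(m, y.size))

def q_terms(y, mu, b):
    """Per-draw contributions to Q(mu) = E[log f(y, b; mu) | y], up to a constant."""
    return -0.5 * ((y - mu - b) ** 2).sum(axis=1)

mu, m = 0.0, 20                       # crude start, small initial MC sample
for _ in range(200):
    b = sample_posterior(y, mu, m, rng)
    mu_new = (y - b.mean(axis=0)).mean()          # Monte Carlo M-step
    # Estimated increase in Q, with a normal lower confidence bound.
    delta = q_terms(y, mu_new, b) - q_terms(y, mu, b)
    lcl = delta.mean() - 2.33 * delta.std(ddof=1) / np.sqrt(m)   # ~99% level
    if lcl > 0:
        mu = mu_new                   # ascent likely: accept the step
    elif m < 20_000:
        m *= 2                        # ascent in doubt: more MC effort, retry
    else:
        break                         # budget cap, a crude stand-in for the
                                      # paper's confidence-bound stopping rule

print(f"ascent-based MCEM estimate: {mu:.3f} (truth {mu_true}, final m = {m})")
```

When a step is rejected here, a fresh, larger sample is drawn (the paper instead appends draws), but the qualitative behaviour is the same: Monte Carlo effort grows automatically as the estimate nears convergence.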
Similar resources
Ascent-Based Monte Carlo EM
The EM algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high-dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size i...
Learning Sigmoid Belief Networks via Monte Carlo Expectation Maximization
Belief networks are commonly used generative models of data, but require expensive posterior estimation to train and test the model. Learning typically proceeds by posterior sampling, variational approximations, or recognition networks, combined with stochastic optimization. We propose using an online Monte Carlo expectation–maximization (MCEM) algorithm to learn the maximum a posteriori (MAP) e...
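The online flavour of MCEM can be sketched compactly; the following is a hypothetical Robbins–Monro-style illustration on the same toy Gaussian model as above, not the paper's sigmoid belief network. Each arriving observation contributes one Monte Carlo estimate of its E-step statistic, blended in with a decaying step size, after which the M-step is re-solved in closed form.

```python
# Hedged sketch of online MCEM on the toy model y = mu + b + e (the
# cited work's belief-net posterior sampler is not reproduced here).
import numpy as np

rng = np.random.default_rng(1)
mu_true = 2.0
mu, s_hat = 0.0, 0.0          # parameter and running E-step statistic

for t in range(1, 5001):
    y_t = mu_true + rng.normal() + rng.normal()   # stream one datum
    # Monte Carlo E-step for this datum: b | y, mu ~ N((y - mu)/2, 1/2).
    b = (y_t - mu) / 2.0 + rng.normal(scale=np.sqrt(0.5), size=10)
    s_t = y_t - b.mean()                          # MC estimate of E[y - b | y]
    gamma = t ** -0.6                             # decaying step size
    s_hat = (1 - gamma) * s_hat + gamma * s_t     # stochastic approximation
    mu = s_hat                                    # closed-form M-step

print(f"online MCEM estimate: {mu:.3f} (truth {mu_true})")
```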
Estimating noise from noisy speech features with a Monte Carlo variant of the expectation maximization algorithm
In this work, we derive a Monte Carlo expectation maximization algorithm for estimating noise from a noisy utterance. In contrast to earlier approaches, where the distribution of noise was estimated based on a vector Taylor series expansion, we use a combination of importance sampling and Parzen-window density estimation to numerically approximate the occurring integrals with the Monte Carlo me...
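The two numerical ingredients named here, importance sampling for an intractable expectation and a Parzen-window (kernel) density estimate built from the same draws, can be sketched generically. The speech model itself is omitted; the target, proposal, bandwidth and sample sizes below are illustrative choices.

```python
# Generic sketch: self-normalized importance sampling plus a weighted
# Parzen-window density estimate, on a toy one-dimensional target.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def log_p_unnorm(x):
    """Toy target p(x) ∝ exp(-x^2/2) * sigmoid(3x), known up to a constant."""
    return -0.5 * x**2 - np.log1p(np.exp(-3 * x))

# Proposal q = N(0, 2^2); draw samples and self-normalized weights.
x = rng.normal(scale=2.0, size=5000)
log_w = log_p_unnorm(x) - norm.logpdf(x, scale=2.0)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Importance-sampling estimate of E_p[x] (the integral of interest).
mean_est = np.sum(w * x)

# Parzen-window estimate of p itself from the weighted draws.
h = 0.3                                        # kernel bandwidth (tuning choice)
def parzen(t):
    return np.sum(w * norm.pdf((t - x) / h)) / h

print(f"E_p[x] ≈ {mean_est:.3f}, p(0) ≈ {parzen(0.0):.3f}")
```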
Monte Carlo Expectation Maximization with Hidden Markov Models to Detect Functional Networks in Resting-State fMRI
We propose a novel Bayesian framework for partitioning the cortex into distinct functional networks based on resting-state fMRI. Spatial coherence within the network clusters is modeled using a hidden Markov random field prior. The normalized time-series data, which lie on a high-dimensional sphere, are modeled with a mixture of von Mises-Fisher distributions. To estimate the parameters of this ...
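The emission model alone is easy to sketch; the hidden Markov random field prior over labels is omitted here, and the mean directions, concentrations and weights are illustrative. The sketch evaluates von Mises-Fisher log-densities for unit-norm data and the resulting mixture responsibilities.

```python
# Sketch of the von Mises-Fisher emission model and a mixture E-step;
# not the paper's full spatially coherent model.
import numpy as np
from scipy.special import ive  # exponentially scaled Bessel I, for stability

def vmf_logpdf(X, mu, kappa):
    """log f(x; mu, kappa) for unit vectors X of shape (n, d) on the sphere."""
    d = X.shape[1]
    # log C_d(kappa), using ive to avoid overflow: I_v(k) = ive(v, k) * e^k
    log_c = ((d / 2 - 1) * np.log(kappa)
             - (d / 2) * np.log(2 * np.pi)
             - (np.log(ive(d / 2 - 1, kappa)) + kappa))
    return log_c + kappa * (X @ mu)

rng = np.random.default_rng(3)
X = rng.normal(size=(5, 10))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # project onto the sphere
mu1, mu2 = np.eye(10)[0], np.eye(10)[1]           # illustrative mean directions
log_r = np.stack([np.log(0.5) + vmf_logpdf(X, mu1, 20.0),
                  np.log(0.5) + vmf_logpdf(X, mu2, 20.0)], axis=1)
resp = np.exp(log_r - log_r.max(axis=1, keepdims=True))
resp /= resp.sum(axis=1, keepdims=True)           # posterior cluster probabilities
print(resp)
```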
Quasi-Monte Carlo sampling to improve the efficiency of Monte Carlo EM
In this paper we investigate an efficient implementation of the Monte Carlo EM algorithm based on Quasi-Monte Carlo sampling. The Monte Carlo EM algorithm is a stochastic version of the deterministic EM (Expectation-Maximization) algorithm in which an intractable E-step is replaced by a Monte Carlo approximation. Quasi-Monte Carlo methods produce deterministic sequences of points that can signi...
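The contrast is easy to demonstrate, assuming scipy's qmc module and a toy one-dimensional E-step posterior (both choices are illustrative): scrambled Sobol points pushed through the inverse CDF typically show a much smaller spread across replications than plain Monte Carlo draws of the same size.

```python
# Sketch: plain Monte Carlo vs randomized quasi-Monte Carlo for one
# E-step expectation, E[b | y, mu], under a toy posterior N(1, 1/2).
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(4)
post = norm(loc=1.0, scale=np.sqrt(0.5))   # toy E-step posterior
m = 256                                    # power of 2, as Sobol prefers

def mc_estimate():
    return post.rvs(size=m, random_state=rng).mean()

def qmc_estimate():
    u = qmc.Sobol(d=1, scramble=True, seed=rng).random(m)  # low-discrepancy points
    return post.ppf(u).mean()                              # inverse-CDF transform

mc = [mc_estimate() for _ in range(200)]
rq = [qmc_estimate() for _ in range(200)]
print(f"std of MC estimates:   {np.std(mc):.5f}")
print(f"std of RQMC estimates: {np.std(rq):.5f}")  # typically much smaller
```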
Journal: Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Publication date: 2005